Connectionist and Memory-Array Models of Artificial Grammar Learning
Author
Abstract
Subjects exposed to strings of letters generated by a finite state grammar can later classify grammatical and nongrammatical test strings, even though they cannot adequately say what the rules of the grammar are (e.g., Reber, 1989). The MINERVA 2 (Hintzman, 1986) and Medin and Schaffer (1978) memory-array models and a number of connectionist autoassociator models are tested against experimental data by deriving mainly parameter-free predictions from the models of the rank order of classification difficulty of test strings. The importance of different assumptions regarding the coding of features (How should the absence of a feature be coded? Should single letters or digrams be coded?), the learning rule used (Hebb rule vs. delta rule), and the connectivity (Should features be predicted only by previous features in the string, or by all features simultaneously?) is investigated by determining the performance of the models with and without each assumption. Only one class of connectionist model (the simultaneous delta rule) passes all the tests. It is shown that this class of model can be regarded as abstracting a set of representative but incomplete rules of the grammar.
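The simultaneous delta-rule autoassociator that the abstract singles out can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the letter alphabet, the fixed string length, the +1/-1 coding of feature presence and absence, the learning rate, and the training strings are all assumptions chosen for the example. Each unit learns, by error correction, to predict its own activation from all other units simultaneously, and classification uses reconstruction error as a familiarity score.

```python
import numpy as np

# Hypothetical coding scheme: 5 string positions, each holding one of 5
# letters, giving a 25-unit vector; absence of a feature is coded as -1.
LETTERS = "MTVRX"
POSITIONS = 5

def encode(string):
    """+1/-1 positional letter coding of a string (one unit per position/letter)."""
    v = -np.ones(POSITIONS * len(LETTERS))
    for i, ch in enumerate(string[:POSITIONS]):
        v[i * len(LETTERS) + LETTERS.index(ch)] = 1.0
    return v

def train_autoassociator(strings, lr=0.05, epochs=50):
    """Simultaneous delta rule: every unit is predicted from all other units.

    The weight update is error-correcting (delta rule), not Hebbian:
    only the unexplained part of each unit's activation drives learning.
    """
    n = POSITIONS * len(LETTERS)
    W = np.zeros((n, n))
    for _ in range(epochs):
        for s in strings:
            x = encode(s)
            err = x - W @ x              # prediction error on every unit
            W += lr * np.outer(err, x)   # delta-rule weight change
            np.fill_diagonal(W, 0.0)     # no unit predicts itself
    return W

def familiarity(W, s):
    """Negative reconstruction error: higher means more 'grammatical'."""
    x = encode(s)
    return -np.sum((x - W @ x) ** 2)
```

After training on a set of "grammatical" strings, trained items are reconstructed well while arbitrary strings are not, so ranking test strings by `familiarity` yields the kind of rank-order classification predictions the abstract describes.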
Similar articles
A Connectionist Model of Artificial Grammar Learning: Simulations Based on Higham (1997) Indexes of Knowledge Representation
One of the most vividly discussed issues in the implicit learning literature is the representation of knowledge acquired during learning. The aim of this paper is to address this problem by presenting experimental data and connectionist simulation results of an artificial grammar learning task, using the three indexes proposed by Higham (1997). We demonstrate that a multilayer perceptron network ca...
Connectionist sentence processing in perspective
The emphasis in the connectionist sentence-processing literature on distributed representation and emergence of grammar from such systems seems to have prevented connectionists and symbolists alike from recognizing the often close relations between their respective systems. This paper argues that simple recurrent network (SRN) models proposed by Jordan (1990) and Elman (1990) are more directly ...
Connectionist perspectives on language learning, representation and processing.
The field of formal linguistics was founded on the premise that language is mentally represented as a deterministic symbolic grammar. While this approach has captured many important characteristics of the world's languages, it has also led to a tendency to focus theoretical questions on the correct formalization of grammatical rules while also de-emphasizing the role of learning and statistics ...
SCREEN: Learning a Flat Syntactic and Semantic Spoken Language Analysis Using Artificial Neural Networks
Previous approaches to analyzing spontaneously spoken language often have been based on encoding syntactic and semantic knowledge manually and symbolically. While there has been some progress using statistical or connectionist language models, many current spoken-language systems still use a relatively brittle, hand-coded symbolic grammar or symbolic semantic component. In contrast, we describe...
Evaluation of Two Connectionist Approaches to Stack
This study empirically compares two distributed connectionist learning models trained to represent an arbitrarily deep stack. One is Pollack's Recursive Auto-Associative Memory, a recurrent back propagating neural network that uses a hidden intermediate representation. The other is the Exponential Decay Model, a novel architecture that we propose here, which tries to learn an explicit represen...
Journal: Cognitive Science
Volume 16
Pages: -
Published: 1992